Tensegrity robots, which are composed of rigid rods and flexible cables, exhibit high strength-to-weight ratios and extreme deformations, enabling them to traverse unstructured terrain and even survive harsh impacts. They are difficult to control, however, given their high dimensionality, complex dynamics, and coupled architecture. Physics-based simulation is one avenue for developing locomotion policies that can then be transferred to the real robot, but modeling tensegrity robots is complex, so simulations suffer from a significant sim2real gap. To address this, this paper describes a Real2Sim2Real (R2S2R) strategy for tensegrity robots. The strategy rests on a differentiable physics engine that can be trained with limited data from the real robot, namely offline measurements and a single random trajectory, and that achieves sufficiently high accuracy to discover transferable locomotion policies. Beyond the overall pipeline, the main contributions of this work include the computation of non-zero gradients at contact points, a loss function for matching robot trajectories, and a trajectory-segmentation technique that avoids conflicting gradient evaluations during training. The proposed pipeline is demonstrated and evaluated on a real 3-bar tensegrity robot.
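Two of the listed contributions lend themselves to a compact illustration. The sketch below (hypothetical names and constants, not the paper's implementation) shows a softplus-smoothed contact force whose gradient with respect to penetration never vanishes, and a trajectory-segmentation helper that splits a long rollout into independent windows:

```python
import math

def soft_contact_force(penetration, stiffness=1000.0, eps=0.01):
    """Softplus-smoothed normal force: strictly positive everywhere, so the
    gradient w.r.t. penetration is non-zero even just out of contact."""
    return stiffness * eps * math.log1p(math.exp(penetration / eps))

def soft_contact_grad(penetration, stiffness=1000.0, eps=0.01):
    """Analytic derivative of the force above: a logistic sigmoid, > 0."""
    return stiffness / (1.0 + math.exp(-penetration / eps))

def segment_trajectory(traj, seg_len):
    """Split a long trajectory into fixed-length windows, each re-anchored
    at a ground-truth state, so gradients from different windows are
    evaluated independently and do not conflict during training."""
    return [traj[i:i + seg_len]
            for i in range(0, len(traj) - seg_len + 1, seg_len)]
```

A hard contact model (`force = stiffness * max(penetration, 0)`) would have a zero gradient whenever the body is out of contact; the smoothed version keeps the optimization landscape informative near impacts.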
Recent work in robotic manipulation has focused on object retrieval in cluttered spaces under occlusion. Most of these efforts, however, either lack an analysis of the conditions under which the methods are complete, or the methods apply only when objects can be removed from the workspace. This work formulates the general, occlusion-aware manipulation task and focuses on safe object reconstruction in a confined space with in-place rearrangement of objects. It proposes a framework that provides safety guarantees. Furthermore, an instantiation of this abstract framework for monotone instances is developed and evaluated empirically against random and greedy baselines on randomly generated experiments in simulation. Even for cluttered scenes with realistic objects, the proposed algorithm significantly outperforms the baselines and maintains a high success rate across experimental conditions.
Tensegrity robots, composed of rigid rods and flexible cables, are difficult to model and control accurately given their complex dynamics and large number of DoFs. Differentiable physics engines have recently been proposed as a data-driven approach for model identification of such complex robotic systems. These engines are often executed at high frequency to achieve accurate simulation. Ground-truth trajectories for training differentiable engines, however, are typically not available at such high frequencies due to the limitations of real-world sensors. The current work focuses on this frequency mismatch, which impacts modeling accuracy. We propose a recurrent structure for a differentiable physics engine of tensegrity robots, which can be trained effectively even with low-frequency trajectories. To train this new recurrent engine in a robust way, this work introduces, relative to prior work: (i) a new implicit integration scheme, (ii) a progressive training pipeline, and (iii) a differentiable collision checker. A model of NASA's icosahedron SUPERballBot in MuJoCo is used as the ground-truth system for collecting training data. Simulated experiments show that, once trained on low-frequency trajectories from MuJoCo, the recurrent differentiable engine matches the behavior of the MuJoCo system. The criterion for success is whether a locomotion policy learned with the differentiable engine can be transferred back to the ground-truth system and result in a similar gait. Notably, the amount of ground-truth data needed to train the differentiable engine so that the policy transfers to the ground-truth system is 1% of the data needed to train the policy directly on the ground-truth system.
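A minimal sketch of two of the ingredients, using a damped linear spring as a stand-in for the tensegrity dynamics (all names and constants are illustrative, not the paper's engine): an implicit-Euler step solved in closed form, and a recurrent rollout that takes many high-frequency steps between consecutive low-frequency observations:

```python
def implicit_euler_step(x, v, dt, k=10.0, c=0.5):
    """One implicit-Euler step of a damped linear spring (a crude stand-in
    for a tensegrity cable).  The unknowns satisfy
        x1 = x + dt * v1,   v1 = v + dt * (-k * x1 - c * v1),
    a 2x2 linear system solved in closed form; the implicit solve keeps
    the step stable even for large dt, unlike explicit Euler."""
    v1 = (v - dt * k * x) / (1.0 + dt * c + dt * dt * k)
    return x + dt * v1, v1

def rollout(x, v, dt, n_inner, n_obs):
    """Recurrent rollout: the engine runs n_inner high-frequency steps
    between consecutive low-frequency observations, feeding its own
    predicted state back in rather than resetting from ground truth."""
    obs = []
    for _ in range(n_obs):
        for _ in range(n_inner):
            x, v = implicit_euler_step(x, v, dt)
        obs.append((x, v))
    return obs
```

Training through such a rollout back-propagates the loss at each low-frequency observation through all intermediate high-frequency steps, which is what lets a low-rate ground-truth trajectory supervise a high-rate engine.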
Understanding the global dynamics of a robot controller, such as identifying attractors and their regions of attraction (RoA), is important for safe deployment and for synthesizing more effective hybrid controllers. This paper proposes a topological framework for analyzing the global dynamics of robot controllers, even data-driven ones, in an effective and explainable way. It builds a combinatorial representation of the state space and nonlinear dynamics of the underlying system, summarized in a directed acyclic graph, the Morse graph. The approach probes the dynamics only locally, by propagating short trajectories over a state-space discretization; the dynamics need only be given as a Lipschitz-continuous function. It is evaluated on numerical and data-driven controllers for classical robotic benchmarks, and compared against established analytical alternatives and recent machine-learning ones for estimating the RoAs of such controllers. It is demonstrated to outperform them in accuracy and efficiency. It also provides deeper insight, since it describes the global dynamics up to the resolution of the discretization. This allows using the Morse graph to identify how controllers can be composed into improved hybrid solutions, or to identify physical limitations of the robotic system.
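The construction can be sketched on a toy 1D map (hypothetical names; real implementations rely on dedicated combinatorial-dynamics libraries): discretize the state space into cells, propagate each cell's representative a few short steps, and extract the recurrent components, which become the nodes of the Morse graph:

```python
def cell_of(x, lo, hi, n):
    """Index of the discretization cell containing x (clamped to bounds)."""
    i = int((x - lo) / (hi - lo) * n)
    return min(max(i, 0), n - 1)

def build_cell_map(f, lo, hi, n, n_steps=3):
    """Combinatorial map on the discretization: propagate each cell's
    center a few short steps under x' = f(x) and record an edge to the
    cell it lands in."""
    edges = {i: set() for i in range(n)}
    for i in range(n):
        x = lo + (i + 0.5) * (hi - lo) / n
        for _ in range(n_steps):
            x = f(x)
        edges[i].add(cell_of(x, lo, hi, n))
    return edges

def reachable(edges, src):
    """All cells reachable from src in the cell map."""
    seen, stack = {src}, [src]
    while stack:
        for v in edges[stack.pop()]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return seen

def morse_sets(edges):
    """Recurrent components (nodes of the Morse graph): cells lying on a
    cycle of the cell map.  Naive O(n^2) SCC check, kept for clarity."""
    reach = {i: reachable(edges, i) for i in edges}
    sets, assigned = [], set()
    for i in edges:
        if i in assigned:
            continue
        comp = frozenset(j for j in reach[i] if i in reach[j])
        if i in edges[i] or len(comp) > 1:
            sets.append(comp)
            assigned |= comp
    return sets
```

For the contraction `f(x) = 0.5 * x` on `[-1, 1]` the only recurrent cells are the two straddling the fixed point at the origin; all other cells feed into them, which is exactly the attractor/RoA structure the Morse graph exposes.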
This paper aims to improve the path quality and computational efficiency of sampling-based kinodynamic planners for vehicular systems. It proposes a learning framework for identifying promising controls during the expansion step of sampling-based motion planners for systems with dynamics. Offline, the learning process is trained to return the highest-quality control that reaches a local goal state (i.e., a waypoint) in the absence of obstacles, given an input difference vector between the current state and the local goal state. The data-generation scheme provides bounds on the waypoint dispersion and uses state-space pruning to ensure high-quality controls. Because it focuses on the system's dynamics, the process is data-efficient and takes place only once per dynamical system, so it can be reused across different environments through a modular expansion function. This work integrates the proposed learning process with a) an exploratory expansion function that generates biased coverage over the reachable space, and b) a proposed exploitative expansion function for mobile robots that generates waypoints using medial-axis information. The paper evaluates the learning process and the corresponding planners on first- and second-order differential-drive systems. The results show that the proposed integration of learning and planning produces better-quality paths than kinodynamic planning with random controls, in fewer iterations and less computation time.
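A minimal stand-in for the offline/online split described above, using a first-order differential-drive model and a nearest-neighbor lookup in place of the learned regressor (the names, sampling ranges, and the lookup itself are illustrative assumptions):

```python
import math
import random

def simulate(state, control, dt=0.1, steps=10):
    """First-order differential drive: state (x, y, theta), control (v, w)."""
    x, y, th = state
    v, w = control
    for _ in range(steps):
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        th += w * dt
    return (x, y, th)

def build_control_db(n_controls=200, seed=0):
    """Offline phase (a sketch, not the paper's data-generation scheme):
    sample controls and record the state difference each one produces
    from the origin in obstacle-free space."""
    rng = random.Random(seed)
    db = []
    for _ in range(n_controls):
        u = (rng.uniform(0.0, 1.0), rng.uniform(-1.0, 1.0))
        db.append((simulate((0.0, 0.0, 0.0), u), u))
    return db

def best_control(db, target_diff):
    """Online expansion: return the stored control whose outcome is
    closest to the requested local goal (waypoint)."""
    return min(db, key=lambda e: sum((a - b) ** 2
                                     for a, b in zip(e[0], target_diff)))[1]
```

Because the table is indexed by state *differences*, it depends only on the dynamics, not on any particular environment, which is the property that makes the offline phase a one-time cost per system.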
Recent progress in geometric computer vision has shown significant advances in reconstruction and novel view rendering from multiple views by capturing the scene as a neural radiance field. Such approaches have changed the paradigm of reconstruction but need a plethora of views and do not make use of object shape priors. On the other hand, deep learning has shown how to use priors in order to infer shape from single images. Such approaches, though, require that the object is reconstructed in a canonical pose or assume that object pose is known during training. In this paper, we address the problem of how to compute equivariant priors for reconstruction from a few images, given the relative poses of the cameras. Our proposed reconstruction is $SE(3)$-gauge equivariant, meaning that it is equivariant to the choice of world frame. To achieve this, we make two novel contributions to light field processing: we define light field convolution and we show how it can be approximated by intra-view $SE(2)$ convolutions because the original light field convolution is computationally and memory-wise intractable; we design a map from the light field to $\mathbb{R}^3$ that is equivariant to the transformation of the world frame and to the rotation of the views. We demonstrate equivariance by obtaining robust results in roto-translated datasets without performing transformation augmentation.
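The gauge-equivariance property can be illustrated on a toy 2D analogue (not the paper's light-field convolution): an isotropic filter commutes with 90-degree rotations of the grid, i.e., convolving a rotated image equals rotating the convolved image:

```python
def rot90(img):
    """Rotate a square 2D grid by 90 degrees (counterclockwise)."""
    n = len(img)
    return [[img[j][n - 1 - i] for j in range(n)] for i in range(n)]

def conv_isotropic(img, w_center=0.5, w_nbr=0.125):
    """Cross-shaped averaging filter with zero padding.  The neighbor set
    {up, down, left, right} is rotation-symmetric, so the operation is
    equivariant to 90-degree rotations (a toy stand-in for the SE(2)
    group convolution discussed above)."""
    n = len(img)
    out = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = w_center * img[i][j]
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if 0 <= i + di < n and 0 <= j + dj < n:
                    s += w_nbr * img[i + di][j + dj]
            out[i][j] = s
    return out
```

An anisotropic kernel would break the commuting diagram; equivariant architectures obtain this property by construction, which is why no rotation augmentation is needed at training time.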
Practical applications of mechanical metamaterials often involve solving inverse problems where the objective is to find the (multiple) microarchitectures that give rise to a given set of properties. The limited resolution of additive manufacturing techniques often requires solving such inverse problems for specific sizes. One should, therefore, find multiple microarchitectural designs that exhibit the desired properties for a specimen with given dimensions. Moreover, the candidate microarchitectures should be resistant to fatigue and fracture, meaning that peak stresses should be minimized as well. Such a multi-objective inverse design problem is formidably difficult to solve but its solution is the key to real-world applications of mechanical metamaterials. Here, we propose a modular approach titled 'Deep-DRAM' that combines four decoupled models, including two deep learning models (DLM), a deep generative model (DGM) based on conditional variational autoencoders (CVAE), and direct finite element (FE) simulations. Deep-DRAM (deep learning for the design of random-network metamaterials) integrates these models into a unified framework capable of finding many solutions to the multi-objective inverse design problem posed here. The integrated framework first introduces the desired elastic properties to the DGM, which returns a set of candidate designs. The candidate designs, together with the target specimen dimensions are then passed to the DLM which predicts their actual elastic properties considering the specimen size. After a filtering step based on the closeness of the actual properties to the desired ones, the last step uses direct FE simulations to identify the designs with the minimum peak stresses.
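The four-stage flow described above can be sketched as a generic pipeline with the models passed in as callables (all names and thresholds are placeholders, not Deep-DRAM's actual interfaces):

```python
def inverse_design(dgm_sample, dlm_predict, fe_peak_stress,
                   target_props, dims, n_candidates=100, tol=0.05, n_final=5):
    """Skeleton of the modular inverse-design flow:
    1) the generative model proposes candidates for the target properties,
    2) the predictive model estimates size-dependent actual properties,
    3) candidates far from the target are filtered out,
    4) FE simulation ranks the survivors by peak stress."""
    candidates = [dgm_sample(target_props) for _ in range(n_candidates)]
    kept = [c for c in candidates
            if abs(dlm_predict(c, dims) - target_props)
            <= tol * abs(target_props)]
    return sorted(kept, key=fe_peak_stress)[:n_final]
```

Keeping the four models decoupled behind plain callables is what makes the framework modular: any stage (e.g., the FE check) can be swapped without retraining the others.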
Recent methods for neural surface representation and rendering, for example NeuS, have demonstrated remarkably high-quality reconstruction of static scenes. However, the training of NeuS takes an extremely long time (8 hours), which makes it almost impossible to apply it to dynamic scenes with thousands of frames. We propose a fast neural surface reconstruction approach, called NeuS2, which achieves two orders of magnitude improvement in terms of acceleration without compromising reconstruction quality. To accelerate the training process, we integrate multi-resolution hash encodings into a neural surface representation and implement our whole algorithm in CUDA. We also present a lightweight calculation of second-order derivatives tailored to our networks (i.e., ReLU-based MLPs), which achieves a factor-of-two speed-up. To further stabilize training, a progressive learning strategy is proposed to optimize multi-resolution hash encodings from coarse to fine. In addition, we extend our method for reconstructing dynamic scenes with an incremental training strategy. Our experiments on various datasets demonstrate that NeuS2 significantly outperforms state-of-the-art methods in both surface reconstruction accuracy and training speed. The video is available at https://vcai.mpi-inf.mpg.de/projects/NeuS2/ .
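The lightweight second-order calculation exploits the fact that ReLU has zero second derivative, so derivatives through a ReLU MLP reduce to weight products gated by fixed activation masks. A scalar 1D sketch of that fact (illustrative, not NeuS2's CUDA implementation):

```python
def mlp_forward(ws, x):
    """Tiny scalar ReLU MLP: x -> relu(w1*x) -> relu(w2*...) -> wn * h.
    Returns the output and the pre-activations (needed for the masks)."""
    h, pre = x, []
    for w in ws[:-1]:
        h = w * h
        pre.append(h)
        h = max(h, 0.0)
    return ws[-1] * h, pre

def mlp_input_grad(ws, x):
    """Closed-form input gradient: the product of the weights gated by the
    0/1 ReLU masks.  Because ReLU'' = 0, the masks are locally constant,
    so higher-order derivatives need no extra backward pass through the
    activations."""
    _, pre = mlp_forward(ws, x)
    g = ws[-1]
    for w, p in zip(ws[:-1], pre):
        g *= w * (1.0 if p > 0 else 0.0)
    return g
```

Away from the ReLU kinks the network is locally linear in its input, which is why the gradient above matches a finite-difference estimate exactly.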
We propose a novel method for 3D shape completion from a partial observation of a point cloud. Existing methods either operate on a global latent code, which limits the expressiveness of their model, or autoregressively estimate the local features, which is computationally expensive. Instead, our method estimates the entire local feature field by a single feedforward network by formulating this problem as a tensor completion problem on the feature volume of the object. Due to the redundancy of local feature volumes, this tensor completion problem can be further reduced to estimating the canonical factors of the feature volume. A hierarchical variational autoencoder (VAE) with tiny MLPs is used to probabilistically estimate the canonical factors of the complete feature volume. The effectiveness of the proposed method is validated by comparing it with the state-of-the-art method quantitatively and qualitatively. Further ablation studies also show the need to adopt a hierarchical architecture to capture the multimodal distribution of possible shapes.
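The reduction from a full feature volume to canonical factors can be sketched as a CP (canonical polyadic) reconstruction; for a rank-R decomposition of an N×N×N volume, the unknowns drop from N^3 to 3RN (a plain-Python sketch with hypothetical names):

```python
def cp_reconstruct(fx, fy, fz):
    """Reconstruct a rank-R feature volume from its canonical factors:
        V[i][j][k] = sum_r fx[r][i] * fy[r][j] * fz[r][k]
    Estimating the three factor lists instead of the full volume is what
    reduces the completion problem from N^3 unknowns to 3*R*N."""
    R = len(fx)
    I, J, K = len(fx[0]), len(fy[0]), len(fz[0])
    return [[[sum(fx[r][i] * fy[r][j] * fz[r][k] for r in range(R))
              for k in range(K)] for j in range(J)] for i in range(I)]
```

A rank-1 example: factors `[1, 2]`, `[3, 4]`, `[5]` produce a 2×2×1 volume whose `(0,0,0)` entry is 1·3·5 = 15, and stacking ranks simply adds their contributions.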
As aerial robots are tasked to navigate environments of increased complexity, embedding collision tolerance in their design becomes important. In this survey we review the current state-of-the-art within the niche field of collision-tolerant micro aerial vehicles and present different design approaches identified in the literature, as well as methods that have focused on autonomy functionalities that exploit collision resilience. Subsequently, we discuss the relevance to biological systems and provide our view on key directions of future fruitful research.